Results 1 - 20 of 50,783
1.
Int J Oral Sci ; 16(1): 34, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38719817

ABSTRACT

Accurate segmentation of oral surgery-related tissues from cone beam computed tomography (CBCT) images can significantly accelerate treatment planning and improve surgical accuracy. In this paper, we propose a fully automated tissue segmentation system for dental implant surgery. Specifically, we propose an image preprocessing method based on data distribution histograms, which can adaptively process CBCT images acquired with different parameters. On this basis, a bone segmentation network produces segmentation results for the alveolar bone, teeth, and maxillary sinus. The tooth and mandibular regions then serve as regions of interest (ROIs) for tooth segmentation and mandibular canal segmentation, respectively. The tooth segmentation results also yield the ordering of the dentition. Experimental results show that our method achieves higher segmentation accuracy and efficiency than existing methods, with average Dice scores of 96.5%, 95.4%, 93.6%, and 94.8% on the tooth, alveolar bone, maxillary sinus, and mandibular canal segmentation tasks, respectively. These results demonstrate that the system can accelerate the development of digital dentistry.
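The average Dice scores reported above can be reproduced for any predicted/ground-truth mask pair with a few lines. A minimal sketch in Python (the function name, toy masks, and epsilon are our own, not from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
# |pred| = 3, |target| = 3, intersection = 2 -> Dice = 2*2/(3+3) ~= 0.667
print(round(dice_score(pred, target), 3))
```

The same function applies unchanged to 3D CBCT label volumes, since numpy reductions are dimension-agnostic.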


Subject(s)
Cone-Beam Computed Tomography , Cone-Beam Computed Tomography/methods , Humans , Alveolar Process/diagnostic imaging , Image Processing, Computer-Assisted/methods , Artificial Intelligence , Maxillary Sinus/diagnostic imaging , Maxillary Sinus/surgery , Mandible/diagnostic imaging , Mandible/surgery , Tooth/diagnostic imaging
2.
Sci Rep ; 14(1): 10569, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38719918

ABSTRACT

Within the medical field of human assisted reproductive technology, a method for interpretable, non-invasive, and objective oocyte evaluation is lacking. To address this clinical gap, a workflow utilizing machine learning techniques has been developed, involving automatic multi-class segmentation of two-dimensional images, morphometric analysis, and prediction of developmental outcomes of mature denuded oocytes based on feature extraction and clinical variables. Two separate models have been developed for this purpose: a model to perform multi-class segmentation, and a classifier model to classify oocytes as likely or unlikely to develop into a blastocyst (Day 5-7 embryo). The segmentation model is highly accurate at segmenting the oocyte, ensuring high-quality segmented images (masks) are utilized as inputs for the classifier model (mask model). The mask model displayed an area under the curve (AUC) of 0.63, a sensitivity of 0.51, and a specificity of 0.66 on the test set. The AUC fell to 0.57 when features extracted from the ooplasm were removed, suggesting the ooplasm holds the information most pertinent to oocyte developmental competence. The mask model was further compared to a deep learning model, which also utilized the segmented images as inputs. The performance of both models combined in an ensemble model was evaluated, showing an improvement (AUC 0.67) compared to either model alone. The results of this study indicate that direct assessments of the oocyte are warranted, providing the first objective insights into key features for developmental competence, a step beyond the current standard of care, which relies solely on oocyte age as a proxy for quality.
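The classifier metrics quoted above (AUC, sensitivity, specificity) follow standard definitions; a small self-contained sketch (names and toy labels are ours), with the AUC computed via the equivalent Mann-Whitney formulation:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    """AUC as the probability that a random positive is scored above a
    random negative (Mann-Whitney U); ties count as 0.5."""
    y_true = np.asarray(y_true, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[y_true], scores[~y_true]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An ensemble score like the one evaluated above can then be formed by averaging the two models' predicted probabilities before calling `auc`.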


Subject(s)
Blastocyst , Machine Learning , Oocytes , Humans , Blastocyst/cytology , Blastocyst/physiology , Oocytes/cytology , Female , Embryonic Development , Adult , Fertilization in Vitro/methods , Image Processing, Computer-Assisted/methods
3.
J Transl Med ; 22(1): 434, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38720370

ABSTRACT

BACKGROUND: Cardiometabolic disorders pose significant health risks globally. Metabolic syndrome, characterized by a cluster of potentially reversible metabolic abnormalities, is a known risk factor for these disorders. Early detection and intervention for individuals with metabolic abnormalities can help mitigate the risk of developing more serious cardiometabolic conditions. This study aimed to develop an image-derived phenotype (IDP) for metabolic abnormality from unenhanced abdominal computed tomography (CT) scans using deep learning. We used this IDP to classify individuals with metabolic syndrome and predict future occurrence of cardiometabolic disorders. METHODS: A multi-stage deep learning approach was used to extract the IDP from the liver region of unenhanced abdominal CT scans. In a cohort of over 2,000 individuals, the IDP was used to classify individuals with metabolic syndrome. In a subset of over 1,300 individuals, the IDP was used to predict future occurrence of hypertension, type II diabetes, and fatty liver disease. RESULTS: For metabolic syndrome (MetS) classification, we compared the performance of the proposed IDP to liver attenuation and visceral adipose tissue area (VAT). The proposed IDP showed the strongest performance (AUC 0.82) compared to attenuation (AUC 0.70) and VAT (AUC 0.80). For disease prediction, we compared the performance of the IDP to baseline MetS diagnosis. The models including the IDP outperformed MetS for type II diabetes (AUCs 0.91 and 0.90) and fatty liver disease (AUCs 0.67 and 0.62) prediction, and performed comparably for hypertension prediction (AUC 0.77 for both). CONCLUSIONS: This study demonstrated the superior performance of a deep learning IDP compared to traditional radiomic features in classifying individuals with metabolic syndrome. Additionally, the IDP outperformed the clinical definition of metabolic syndrome in predicting future morbidities. Our findings underscore the utility of data-driven imaging phenotypes as valuable tools in the assessment and management of metabolic syndrome and cardiometabolic disorders.


Subject(s)
Deep Learning , Metabolic Syndrome , Phenotype , Humans , Metabolic Syndrome/diagnostic imaging , Metabolic Syndrome/complications , Female , Male , Middle Aged , Tomography, X-Ray Computed , Cardiovascular Diseases/diagnostic imaging , Adult , Image Processing, Computer-Assisted/methods
4.
Cancer Imaging ; 24(1): 60, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38720391

ABSTRACT

BACKGROUND: This study systematically compares the impact of innovative deep learning image reconstruction (DLIR, TrueFidelity) to conventionally used iterative reconstruction (IR) on nodule volumetry and subjective image quality (IQ) at highly reduced radiation doses. This is essential in the context of low-dose CT lung cancer screening, where accurate volumetry and characterization of pulmonary nodules in repeated CT scanning are indispensable. MATERIALS AND METHODS: A standardized CT dataset was established using an anthropomorphic chest phantom (Lungman, Kyoto Kagaku Co., Ltd., Kyoto, Japan) containing a set of 3D-printed lung nodules spanning six diameters (4 to 9 mm) and three morphology classes (lobular, spiculated, smooth), with an established ground truth. Images were acquired at varying radiation doses (6.04, 3.03, 1.54, 0.77, 0.41 and 0.20 mGy) and reconstructed with combinations of reconstruction kernels (soft and hard) and reconstruction algorithms (ASIR-V and DLIR at low, medium and high strength). Semi-automatic volumetry measurements and subjective image quality scores recorded by five radiologists were analyzed with multiple linear regression and mixed-effect ordinal logistic regression models. RESULTS: Volumetric errors of nodules imaged with DLIR are up to 50% lower compared to ASIR-V, especially at radiation doses below 1 mGy and when reconstructed with a hard kernel. Also, across all nodule diameters and morphologies, volumetric errors are commonly lower with DLIR. Furthermore, DLIR renders higher subjective IQ, especially at sub-mGy doses. Radiologists were up to nine times more likely to assign the highest IQ score to these images than to those reconstructed with ASIR-V. Lung nodules with irregular margins and small diameters were also more likely (up to five times) to be ascribed the best IQ scores when reconstructed with DLIR. CONCLUSION: We observed that DLIR performs as well as, or better than, conventionally used reconstruction algorithms in terms of volumetric accuracy and subjective IQ of nodules in an anthropomorphic chest phantom. As such, DLIR may allow lowering the radiation dose for lung cancer screening participants without compromising accurate measurement and characterization of lung nodules.
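The volumetric error against the phantom's ground truth reduces to a signed percentage difference; a minimal sketch (function names are ours, and the sphere volume is only a rough proxy for the 3D-printed nodules, whose true volumes the phantom specifies):

```python
import math

def volumetric_error_pct(measured_mm3: float, truth_mm3: float) -> float:
    """Signed percentage volume error against the phantom ground truth."""
    return 100.0 * (measured_mm3 - truth_mm3) / truth_mm3

def sphere_volume(diameter_mm: float) -> float:
    """Idealized spherical volume for a nodule of a given diameter."""
    r = diameter_mm / 2.0
    return 4.0 / 3.0 * math.pi * r ** 3
```

For a 6 mm idealized nodule (about 113 mm³), a measured volume of 100 mm³ would give an error near -11.6%, the kind of quantity the regressions above model against dose, kernel, and algorithm.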


Subject(s)
Deep Learning , Lung Neoplasms , Multiple Pulmonary Nodules , Phantoms, Imaging , Radiation Dosage , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Multiple Pulmonary Nodules/diagnostic imaging , Multiple Pulmonary Nodules/pathology , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/pathology , Radiographic Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
5.
F1000Res ; 13: 274, 2024.
Article in English | MEDLINE | ID: mdl-38725640

ABSTRACT

Background: Deep learning image reconstruction (DLIR) algorithms are the most recent advance in computed tomography (CT) image reconstruction technology. Owing to drawbacks of iterative reconstruction (IR) techniques, such as unfavorable image texture and nonlinear spatial resolution, DLIR is gradually replacing IR. However, the potential use of DLIR in head and chest CT has to be examined further. Hence, the purpose of this study is to review the influence of DLIR on radiation dose (RD), image noise (IN), and study outcomes compared with IR and filtered back projection (FBP) in head and chest CT examinations. Methods: We performed a detailed search in PubMed, Scopus, Web of Science, Cochrane Library, and Embase for articles reporting the use of DLIR in head and chest CT examinations between 2017 and 2023. Data were retrieved from the short-listed studies following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Results: Of the 196 articles retrieved, 15 were included, with a total sample size of 1,292. Fourteen articles were rated as high quality and one as moderate quality. All studies compared DLIR to IR techniques; five also compared DLIR with FBP. The review showed that DLIR improved image quality (IQ) and reduced RD and IN for head and chest CT examinations. Conclusions: DLIR algorithms have demonstrated a notable enhancement in IQ with reduced IN for head and chest CT examinations at lower doses compared with IR and FBP. DLIR shows potential for enhancing patient care by reducing radiation risks and increasing diagnostic accuracy.


Subject(s)
Algorithms , Deep Learning , Head , Radiation Dosage , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Head/diagnostic imaging , Image Processing, Computer-Assisted/methods , Thorax/diagnostic imaging , Radiography, Thoracic/methods , Signal-To-Noise Ratio
6.
Hum Brain Mapp ; 45(7): e26695, 2024 May.
Article in English | MEDLINE | ID: mdl-38727010

ABSTRACT

Human infancy is marked by the fastest postnatal brain structural changes. It also coincides with the onset of many neurodevelopmental disorders. Atlas-based automated structure labeling has been widely used for analyzing various neuroimaging data. However, the relatively large and nonlinear neuroanatomical differences between infant and adult brains can lead to significant offsets of the labeled structures in infant brains when an adult brain atlas is used. Age-specific 1- and 2-year-old brain atlases covering all major gray and white matter (GM and WM) structures with diffusion tensor imaging (DTI) and structural MRI are critical for precision medicine in the infant population, yet have not been established. In this study, high-quality DTI and structural MRI data were obtained from 50 healthy children to build three-dimensional age-specific 1- and 2-year-old brain templates and atlases. Age-specific templates include a single-subject template as well as two population-averaged templates derived from linear and nonlinear transformation, respectively. Each age-specific atlas consists of 124 comprehensively labeled major GM and WM structures, including 52 cerebral cortical, 10 deep GM, 40 WM, and 22 brainstem and cerebellar structures. When combined with appropriate registration methods, the established atlases can be used for highly accurate automatic labeling of any given infant brain MRI. We demonstrated that one can automatically and effectively delineate deep WM microstructural development from 3 to 38 months by using these age-specific atlases. These established 1- and 2-year-old infant brain DTI atlases can advance our understanding of typical brain development and serve as clinical anatomical references for brain disorders during infancy.


Subject(s)
Atlases as Topic , Brain , Diffusion Tensor Imaging , Gray Matter , White Matter , Humans , Infant , Child, Preschool , Male , White Matter/diagnostic imaging , White Matter/anatomy & histology , White Matter/growth & development , Female , Gray Matter/diagnostic imaging , Gray Matter/growth & development , Gray Matter/anatomy & histology , Diffusion Tensor Imaging/methods , Brain/diagnostic imaging , Brain/growth & development , Brain/anatomy & histology , Image Processing, Computer-Assisted/methods
7.
Hum Brain Mapp ; 45(7): e26697, 2024 May.
Article in English | MEDLINE | ID: mdl-38726888

ABSTRACT

Diffusion MRI with free gradient waveforms, combined with simultaneous relaxation encoding, referred to as multidimensional MRI (MD-MRI), offers microstructural specificity in complex biological tissue. This approach delivers intravoxel information about the microstructure, local chemical composition, and importantly, how these properties are coupled within heterogeneous tissue containing multiple microenvironments. Recent theoretical advances incorporated diffusion time dependency and integrated MD-MRI with concepts from oscillating gradients. This framework probes the diffusion frequency, ω, in addition to the diffusion tensor, D, and relaxation, R1, R2, correlations. A D(ω)-R1-R2 clinical imaging protocol was then introduced, with limited brain coverage and 3 mm³ voxel size, which hinder brain segmentation and future cohort studies. In this study, we introduce an efficient, sparse in vivo MD-MRI acquisition protocol providing whole brain coverage at 2 mm³ voxel size. We demonstrate its feasibility and robustness using a well-defined phantom and repeated scans of five healthy individuals. Additionally, we test different denoising strategies to address the sparse nature of this protocol, and show that efficient MD-MRI encoding design demands a nuanced denoising approach. The MD-MRI framework provides rich information that allows resolving the diffusion frequency dependence into intravoxel components based on their D(ω)-R1-R2 distribution, enabling the creation of microstructure-specific maps in the human brain. Our results encourage the broader adoption and use of this new imaging approach for characterizing healthy and pathological tissues.


Subject(s)
Image Processing, Computer-Assisted , Humans , Adult , Image Processing, Computer-Assisted/methods , Diffusion Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Male , Female , Diffusion Tensor Imaging/methods , Young Adult
8.
PLoS One ; 19(5): e0302067, 2024.
Article in English | MEDLINE | ID: mdl-38728318

ABSTRACT

Many lumbar spine diseases are caused by defects or degeneration of lumbar intervertebral discs (IVD) and are usually diagnosed through inspection of the patient's lumbar spine MRI. Efficient and accurate assessments of the lumbar spine are essential but challenging, because the clinical radiologist workforce is not keeping pace with the demand for radiology services. In this paper, we present a methodology to automatically annotate lumbar spine IVDs with their height and degenerative state, the latter quantified using the Pfirrmann grading system. The method starts with semantic segmentation of a mid-sagittal MRI image into six distinct non-overlapping regions, including the IVD and vertebrae regions. Each IVD region is then located and assigned its label. Using geometry, a line segment bisecting the IVD is determined and its Euclidean length is taken as the IVD height. We then extract an image feature, called the self-similar color correlogram, from the nucleus of the IVD region as a representation of the region's spatial pixel intensity distribution. We then use the IVD height data and a machine learning classification process to predict the Pfirrmann grade of the IVD. We considered five different deep learning networks and six different machine learning algorithms in our experiment and found the combination of the ResNet-50 model and an Ensemble of Decision Trees classifier to give the best results. When tested using a dataset containing 515 MRI studies, we achieved a mean accuracy of 88.1%.
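The disc-height step described above is just the Euclidean length of the bisecting segment, scaled by the pixel spacing; a minimal sketch (the function name and pixel-spacing parameter are ours):

```python
import math

def ivd_height(p1, p2, mm_per_pixel: float = 1.0) -> float:
    """Euclidean length of the line segment bisecting the disc,
    converted from pixel units to millimetres."""
    return mm_per_pixel * math.dist(p1, p2)

# Endpoints of the bisecting segment in (row, col) pixel coordinates.
print(ivd_height((0, 0), (3, 4)))        # a 3-4-5 triangle -> 5.0 pixels
print(ivd_height((0, 0), (3, 4), 0.5))   # at 0.5 mm/pixel -> 2.5 mm
```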


Subject(s)
Intervertebral Disc , Lumbar Vertebrae , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Lumbar Vertebrae/diagnostic imaging , Intervertebral Disc/diagnostic imaging , Intervertebral Disc Degeneration/diagnostic imaging , Intervertebral Disc Degeneration/pathology , Machine Learning , Male , Female , Middle Aged , Image Processing, Computer-Assisted/methods , Adult
9.
Sci Rep ; 14(1): 10664, 2024 05 09.
Article in English | MEDLINE | ID: mdl-38724603

ABSTRACT

Kiwifruit soft rot is highly contagious and causes serious economic loss. Therefore, early detection and elimination of soft rot are important for postharvest treatment and storage of kiwifruit. This study aims to accurately detect kiwifruit soft rot based on hyperspectral images by using a deep learning approach for image classification. A dual-branch selective attention capsule network (DBSACaps) was proposed to improve the classification accuracy. The network uses two branches to separately extract the spectral and spatial features so as to reduce their mutual interference, followed by fusion of the two features through the attention mechanism. A capsule network was used instead of convolutional neural networks to extract the features and complete the classification. Compared with existing methods, the proposed method exhibited the best classification performance on the kiwifruit soft rot dataset, with an overall accuracy of 97.08% and a 97.83% accuracy for soft rot. Our results confirm that potential soft rot of kiwifruit can be detected using hyperspectral images, which may contribute to the construction of smart agriculture.
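The attention-based fusion of the two branches could look roughly like the following toy sketch. This is a generic selective-attention pattern under our own simplifying assumptions (scalar branch scores, equal feature dimensions), not the paper's DBSACaps architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def selective_fusion(spectral_feat, spatial_feat, w_spec=1.0, w_spat=1.0):
    """Score each branch from its global average response, then mix the
    two branch features with softmax attention weights."""
    scores = np.array([w_spec * spectral_feat.mean(),
                       w_spat * spatial_feat.mean()])
    a = softmax(scores)  # a[0] + a[1] == 1
    return a[0] * spectral_feat + a[1] * spatial_feat
```

Keeping the branches separate until this fusion step is what limits the mutual interference between spectral and spatial features that the abstract mentions.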


Subject(s)
Actinidia , Neural Networks, Computer , Plant Diseases , Actinidia/microbiology , Plant Diseases/microbiology , Deep Learning , Hyperspectral Imaging/methods , Fruit/microbiology , Image Processing, Computer-Assisted/methods
10.
Nat Commun ; 15(1): 3942, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38729933

ABSTRACT

In clinical oncology, many diagnostic tasks rely on the identification of cells in histopathology images. While supervised machine learning techniques require labels, providing manual cell annotations is time-consuming. In this paper, we propose a self-supervised framework (enVironment-aware cOntrastive cell represenTation learning: VOLTA) for cell representation learning in histopathology images using a technique that accounts for the cell's mutual relationship with its environment. We subject our model to extensive experiments on data collected from multiple institutions comprising over 800,000 cells and six cancer types. To showcase the potential of our proposed framework, we apply VOLTA to ovarian and endometrial cancers and demonstrate that our cell representations can be utilized to identify the known histotypes of ovarian cancer and provide insights that link histopathology and molecular subtypes of endometrial cancer. Unlike supervised models, we provide a framework that can empower discoveries without any annotation data, even in situations where sample sizes are limited.
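Contrastive representation learning of this kind typically optimizes an InfoNCE-style objective; a minimal single-anchor sketch (the loss form and temperature value are standard choices we assume here, not details taken from VOLTA):

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: pull the positive embedding close,
    push negative embeddings away. Inputs are L2-normalised first."""
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    a, p, n = norm(anchor), norm(positive), norm(negatives)
    # Cosine similarities: positive first, then all negatives.
    logits = np.concatenate(([a @ p], n @ a)) / temperature
    logits -= logits.max()  # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))
```

An environment-aware variant would build the positive pair from a cell and a view that includes its surrounding tissue context, which is the intuition the abstract describes.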


Subject(s)
Endometrial Neoplasms , Ovarian Neoplasms , Humans , Female , Endometrial Neoplasms/pathology , Ovarian Neoplasms/pathology , Machine Learning , Supervised Machine Learning , Algorithms , Image Processing, Computer-Assisted/methods
11.
Sci Rep ; 14(1): 10753, 2024 05 10.
Article in English | MEDLINE | ID: mdl-38730248

ABSTRACT

This paper proposes an approach to enhance the differentiation between benign and malignant breast tumors (BT) using histopathology images from the BreakHis dataset. The main stages are preprocessing, which encompasses image resizing and data partitioning into training and testing sets, followed by data augmentation techniques. Both feature extraction and classification are performed by a custom CNN. The experimental results show that the proposed approach using the custom CNN model exhibits better performance, with an accuracy of 84%, than applying the same approach using other pretrained models, including MobileNetV3, EfficientNetB0, Vgg16, and ResNet50V2, which yield relatively lower accuracies ranging from 74 to 82%; these four models are used as both feature extractors and classifiers. To increase the accuracy and other performance metrics, Grey Wolf Optimization (GWO) and Modified Gorilla Troops Optimization (MGTO) metaheuristic optimizers are applied to each model separately for hyperparameter tuning. In this case, the experimental results show that the custom CNN model, refined with MGTO optimization, reaches an exceptional accuracy of 93.13% in just 10 iterations, outperforming the other state-of-the-art methods and the four pretrained models on the BreakHis dataset.
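A minimal Grey Wolf Optimization loop for continuous minimization looks roughly as follows. This is a generic GWO sketch on a toy objective (population size, iteration count, bounds, and seed are our own choices, and the paper's MGTO variant differs):

```python
import numpy as np

rng = np.random.default_rng(0)

def gwo_minimize(f, dim=2, n_wolves=10, iters=50, lo=-5.0, hi=5.0):
    """Minimal Grey Wolf Optimizer: wolves move toward the three best
    solutions (alpha, beta, delta) with a decaying exploration term."""
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(iters):
        a = 2.0 * (1 - t / iters)          # exploration coefficient, 2 -> 0
        order = np.argsort([f(x) for x in X])
        leaders = X[order[:3]]             # alpha, beta, delta (copies)
        for i in range(n_wolves):
            steps = []
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                steps.append(leader - A * np.abs(C * leader - X[i]))
            X[i] = np.clip(np.mean(steps, axis=0), lo, hi)
    best = min(X, key=f)
    return best, f(best)

# Toy stand-in for "validation loss as a function of hyperparameters".
best, val = gwo_minimize(lambda x: np.sum(x ** 2))
```

For hyperparameter tuning, `f` would instead train the CNN with the candidate hyperparameters and return a validation error.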


Subject(s)
Breast Neoplasms , Deep Learning , Humans , Breast Neoplasms/classification , Breast Neoplasms/pathology , Breast Neoplasms/diagnosis , Female , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Algorithms
12.
Methods Cell Biol ; 186: 213-231, 2024.
Article in English | MEDLINE | ID: mdl-38705600

ABSTRACT

Advancements in multiplexed tissue imaging technologies are vital in shaping our understanding of tissue microenvironmental influences in disease contexts. These technologies now allow us to relate the phenotype of individual cells to their higher-order roles in tissue organization and function. Multiplexed Ion Beam Imaging (MIBI) is one of such technologies, which uses metal isotope-labeled antibodies and secondary ion mass spectrometry (SIMS) to image more than 40 protein markers simultaneously within a single tissue section. Here, we describe an optimized MIBI workflow for high-plex analysis of Formalin-Fixed Paraffin-Embedded (FFPE) tissues following antigen retrieval, metal isotope-conjugated antibody staining, imaging using the MIBI instrument, and subsequent data processing and analysis. While this workflow is focused on imaging human FFPE samples using the MIBI, this workflow can be easily extended to model systems, biological questions, and multiplexed imaging modalities.


Subject(s)
Paraffin Embedding , Humans , Paraffin Embedding/methods , Spectrometry, Mass, Secondary Ion/methods , Tissue Fixation/methods , Image Processing, Computer-Assisted/methods , Formaldehyde/chemistry
13.
BMC Oral Health ; 24(1): 521, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38698377

ABSTRACT

BACKGROUND: Oral mucosal lesions resemble the surrounding normal tissue, i.e., they have many non-salient features, which poses a challenge for accurate lesion segmentation. Additionally, high-precision large models have too many parameters, which puts pressure on storage and makes them difficult to deploy on portable devices. METHODS: To address these issues, we design a non-salient target segmentation model (NTSM) to improve segmentation performance while reducing the number of parameters. The NTSM includes a difference association (DA) module and multiple feature hierarchy pyramid attention (FHPA) modules. The DA module enhances feature differences at different levels to learn local context information and extend the segmentation mask to potentially similar areas. It also learns logical semantic relationship information through different receptive fields to determine the actual lesions and further elevates the segmentation performance on non-salient lesions. The FHPA module extracts pathological information from different views by performing the Hadamard product attention (HPA) operation on input features, which reduces the number of parameters. RESULTS: The experimental results on the oral mucosal diseases (OMD) dataset and the international skin imaging collaboration (ISIC) dataset demonstrate that our model outperforms existing state-of-the-art methods. Compared with the nnU-Net backbone, our model has 43.20% fewer parameters while still achieving a 3.14% increase in the Dice score. CONCLUSIONS: Our model has high segmentation accuracy on non-salient areas of oral mucosal diseases and can effectively reduce resource consumption.
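The core idea of Hadamard product attention, weighting features element-wise instead of through dense projections to save parameters, can be sketched generically (this toy sigmoid gate is our own simplification, not the paper's FHPA module):

```python
import numpy as np

def hadamard_attention(features: np.ndarray, gate_logits: np.ndarray) -> np.ndarray:
    """Element-wise (Hadamard) attention: squash the gate logits through a
    sigmoid and multiply the feature map by the resulting 0..1 weights.
    No dense weight matrices are involved, so the parameter count stays
    proportional to the feature size rather than its square."""
    gate = 1.0 / (1.0 + np.exp(-gate_logits))
    return features * gate
```

A dense attention layer over a d-dimensional feature needs O(d²) weights; the element-wise gate needs only O(d), which is the parameter saving the abstract attributes to the HPA operation.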


Subject(s)
Mouth Diseases , Mouth Mucosa , Humans , Mouth Diseases/diagnostic imaging , Mouth Mucosa/pathology , Mouth Mucosa/diagnostic imaging , Image Processing, Computer-Assisted/methods
14.
Sci Rep ; 14(1): 10483, 2024 05 07.
Article in English | MEDLINE | ID: mdl-38714764

ABSTRACT

Automated machine learning (AutoML) allows for the simplified application of machine learning to real-world problems, by the implicit handling of necessary steps such as data pre-processing, feature engineering, model selection and hyperparameter optimization. This has encouraged its use in medical applications such as imaging. However, the impact of common parameter choices such as the number of trials allowed, and the resolution of the input images, has not been comprehensively explored in existing literature. We therefore benchmark AutoKeras (AK), an open-source AutoML framework, against several bespoke deep learning architectures, on five public medical datasets representing a wide range of imaging modalities. It was found that AK could outperform the bespoke models in general, although at the cost of increased training time. Moreover, our experiments suggest that a large number of trials and higher resolutions may not be necessary for optimal performance to be achieved.


Subject(s)
Machine Learning , Humans , Image Processing, Computer-Assisted/methods , Diagnostic Imaging/methods , Deep Learning , Algorithms
15.
Sci Rep ; 14(1): 10471, 2024 05 07.
Article in English | MEDLINE | ID: mdl-38714840

ABSTRACT

Lung diseases impose a significant pathological burden and mortality worldwide; in particular, the differential diagnosis between adenocarcinoma, squamous cell carcinoma, and small cell lung carcinoma is paramount in determining optimal treatment strategies and improving clinical prognoses. Faced with the challenge of improving diagnostic precision and stability, this study developed an innovative deep learning-based model. This model employs a Feature Pyramid Network (FPN) and Squeeze-and-Excitation (SE) modules combined with a Residual Network (ResNet18) to enhance the processing of complex images and conduct multi-scale analysis of each channel's importance in classifying lung cancer. Moreover, the performance of the model is further enhanced by employing knowledge distillation from larger teacher models to more compact student models. Subjected to rigorous five-fold cross-validation, our model outperforms existing models on all performance metrics, exhibiting exceptional diagnostic accuracy. Ablation studies on various model components have verified that each addition effectively improves model performance, achieving an average accuracy of 98.84% and a Matthews Correlation Coefficient (MCC) of 98.83%. Collectively, the results indicate that our model significantly improves the accuracy of disease diagnosis, providing physicians with more precise clinical decision-making support.
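The soft-target part of knowledge distillation from a teacher to a student model is commonly the temperature-scaled KL divergence; a minimal sketch (the temperature and the T² scaling follow the standard Hinton-style formulation, which we assume here rather than take from the paper):

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax over logits z, softened by temperature T."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 to keep gradient magnitudes comparable across T."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))) * T * T)
```

In training, this term is usually mixed with the ordinary cross-entropy on the hard labels, so the compact student learns both the ground truth and the teacher's inter-class similarities.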


Subject(s)
Deep Learning , Lung Neoplasms , Neural Networks, Computer , Humans , Lung Neoplasms/pathology , Lung Neoplasms/diagnosis , Lung Neoplasms/classification , Small Cell Lung Carcinoma/diagnosis , Small Cell Lung Carcinoma/pathology , Small Cell Lung Carcinoma/classification , Carcinoma, Squamous Cell/diagnosis , Carcinoma, Squamous Cell/pathology , Adenocarcinoma/pathology , Adenocarcinoma/diagnosis , Adenocarcinoma/classification , Image Processing, Computer-Assisted/methods , Diagnosis, Differential
16.
J Hematol Oncol ; 17(1): 27, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38693553

ABSTRACT

The rapid advancements in large language models (LLMs) such as ChatGPT have raised concerns about their potential impact on academic integrity. While initial concerns focused on ChatGPT's writing capabilities, recent updates have integrated DALL-E 3's image generation features, extending the risks to visual evidence in biomedical research. Our tests revealed that ChatGPT's nearly barrier-free image generation feature can be used to generate experimental result images, such as blood smears, Western blots, immunofluorescence images, and so on. Although the current ability of ChatGPT to generate experimental images is limited, the risk of misuse is evident. This development underscores the need for immediate action. We suggest that AI providers restrict the generation of experimental images, develop tools to detect AI-generated images, and consider adding "invisible watermarks" to the generated images. By implementing these measures, we can better ensure the responsible use of AI technology in academic research and maintain the integrity of scientific evidence.


Subject(s)
Biomedical Research , Humans , Biomedical Research/methods , Image Processing, Computer-Assisted/methods , Artificial Intelligence , Software
17.
Braz Oral Res ; 38: e032, 2024.
Article in English | MEDLINE | ID: mdl-38747819

ABSTRACT

This study assessed the reliability of a color measurement method using images obtained from a charge-coupled device (CCD) camera and a stereoscopic loupe. Disc-shaped specimens were created using the composite Filtek Z350 XT (shades DA1, DA2, DA3, and DA4) (n = 3). CIELAB color coordinates of the specimens were measured using the spectrophotometer SP60 over white and black backgrounds. Images of the same specimens were taken using a CCD camera attached to a stereoscopic loupe. The color of the image was measured (red-green-blue [RGB]) using an image processing software and converted to CIELAB coordinates. For each color coordinate, data from images were adjusted using linear regressions predicting those values from SP60. The whiteness index for dentistry (WID) and translucency parameter (TP00) of the specimens as well as the color differences (ΔE00) among pairwise shades were calculated. Data were analyzed via repeated-measures analysis of variance and Tukey's post hoc test (α = 0.05). Images obtained using the loupe tended to be darker and redder than the actual color. Data adjustment resulted in similar WID, ΔE00, and TP00 values to those observed for the spectrophotometer. Differences were observed only for the WID of shade DA3 and ΔE00 for comparing DA1 and DA3 over the black background. However, these differences were not clinically relevant. The use of adjusted data from images taken using a stereoscopic loupe is considered a feasible method for color measurement.
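The per-channel linear adjustment described above (predicting spectrophotometer values from image-derived values) is an ordinary least-squares fit; a minimal sketch (function names and toy data are ours):

```python
import numpy as np

def fit_channel_adjustment(image_vals, spectro_vals):
    """Least-squares line mapping image-derived values of one CIELAB
    coordinate (L*, a*, or b*) to the spectrophotometer reference."""
    slope, intercept = np.polyfit(image_vals, spectro_vals, 1)
    return slope, intercept

def adjust(image_vals, slope, intercept):
    """Apply the fitted adjustment to new image-derived values."""
    return slope * np.asarray(image_vals, dtype=float) + intercept
```

Fitting one such line per coordinate and background is enough to correct the systematic darker/redder bias of the loupe images, after which downstream indices such as WID, ΔE00, and TP00 are computed from the adjusted coordinates.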


Subject(s)
Color , Colorimetry , Composite Resins , Materials Testing , Spectrophotometry , Reproducibility of Results , Composite Resins/chemistry , Spectrophotometry/methods , Colorimetry/methods , Colorimetry/instrumentation , Analysis of Variance , Reference Values , Linear Models , Image Processing, Computer-Assisted/methods
18.
Opt Lett ; 49(10): 2621-2624, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38748120

ABSTRACT

Fluorescence fluctuation super-resolution microscopy (FF-SRM) has emerged as a promising method for the fast, low-cost, and uncomplicated imaging of biological specimens beyond the diffraction limit. Among FF-SRM techniques, super-resolution radial fluctuation (SRRF) microscopy is a popular technique but is prone to artifacts, resulting in low fidelity, especially under conditions of high-density fluorophores. In this Letter, we developed a novel, to the best of our knowledge, combinatory computational super-resolution microscopy method, namely VeSRRF, that demonstrated superior performance in SRRF microscopy. VeSRRF combined intensity and gradient variance reweighted radial fluctuations (VRRF) and enhanced-SRRF (eSRRF) algorithms, leveraging the enhanced resolution achieved through intensity and gradient variance analysis in VRRF and the improved fidelity obtained from the radial gradient convergence transform in eSRRF. Our method was validated using microtubules in mammalian cells as a standard biological model system. Our results demonstrated that VeSRRF consistently achieved higher resolution and fidelity than other algorithms in both single-molecule localization microscopy (SMLM) and FF-SRM. Moreover, we developed the VeSRRF software package, freely available on the open-source ImageJ/Fiji software platform, to facilitate the use of VeSRRF in the broader community of biomedical researchers. VeSRRF is an exemplary method in which complementary microscopy techniques are integrated holistically, creating superior imaging performance and capabilities.
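The variance-reweighting idea underlying VRRF-style methods can be illustrated in a few lines: pixels whose radiality fluctuates strongly across the time stack (genuine blinking fluorophores) are boosted relative to static background. This numpy sketch is a simplified illustration of that principle only, not the VeSRRF algorithm; the stack is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stack: 100 frames of a 32x32 radiality map, standing in
# for the per-frame output of an SRRF-style radial-fluctuation transform.
stack = rng.poisson(5.0, size=(100, 32, 32)).astype(float)

# Temporal statistics per pixel.
mean = stack.mean(axis=0)
var = stack.var(axis=0)

# Variance-reweighted projection: temporal fluctuation (variance) scales
# the mean radiality, suppressing static background relative to
# fluctuating emitters -- the core of intensity-variance reweighting.
reweighted = mean * var / (var.mean() + 1e-12)
```

In the full method this reweighting is combined with gradient-variance analysis and eSRRF's radial gradient convergence transform, which are beyond the scope of this sketch.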


Subject(s)
Algorithms , Microscopy, Fluorescence , Microscopy, Fluorescence/methods , Microtubules , Image Processing, Computer-Assisted/methods , Animals , Software
19.
Opt Lett ; 49(10): 2729-2732, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38748147

ABSTRACT

In recent years, the emergence of a variety of novel optical microscopy techniques has enabled the generation of virtual optical stains of unlabeled tissue specimens, which have the potential to transform existing clinical histopathology workflows. In this work, we present a simultaneous deep ultraviolet transmission and scattering microscopy system that can produce virtual histology images that show concordance to conventional gold-standard histological processing techniques. The results of this work demonstrate the system's diagnostic potential for characterizing unlabeled thin tissue sections and streamlining histological workflows.


Subject(s)
Microscopy, Ultraviolet , Microscopy, Ultraviolet/methods , Humans , Ultraviolet Rays , Microscopy/methods , Image Processing, Computer-Assisted/methods
20.
PLoS One ; 19(5): e0301134, 2024.
Article in English | MEDLINE | ID: mdl-38743645

ABSTRACT

Land cover classification (LCC) is of paramount importance for assessing environmental changes in remote sensing images (RSIs) as it involves assigning categorical labels to ground objects. The growing availability of multi-source RSIs presents an opportunity for intelligent LCC through semantic segmentation, offering a comprehensive understanding of ground objects. Nonetheless, the heterogeneous appearances of terrains and objects contribute to significant intra-class variance and inter-class similarity at various scales, adding complexity to this task. In response, we introduce SLMFNet, an innovative encoder-decoder segmentation network that adeptly addresses this challenge. To mitigate the sparse and imbalanced distribution of RSIs, we incorporate selective attention modules (SAMs) aimed at enhancing the distinguishability of learned representations by integrating contextual affinities within spatial and channel domains through a compact number of matrix operations. Precisely, the selective position attention module (SPAM) employs spatial pyramid pooling (SPP) to resample feature anchors and compute contextual affinities. In tandem, the selective channel attention module (SCAM) concentrates on capturing channel-wise affinity. Initially, feature maps are aggregated into fewer channels, followed by the generation of pairwise channel attention maps between the aggregated channels and all channels. To harness fine-grained details across multiple scales, we introduce a multi-level feature fusion decoder with data-dependent upsampling (MLFD) to meticulously recover and merge feature maps at diverse scales using a trainable projection matrix. Empirical results on the ISPRS Potsdam and DeepGlobe datasets underscore the superior performance of SLMFNet compared to various state-of-the-art methods. Ablation studies affirm the efficacy and precision of SAMs in the proposed model.
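The channel-attention scheme sketched in the abstract (aggregate C feature maps into K ≪ C channels, then form pairwise attention between the aggregated and all channels, giving a C×K map instead of a full C×C one) can be illustrated with a minimal numpy sketch. This is an illustration of the stated idea under assumed shapes, not the SLMFNet implementation, and the projection here is random rather than learned:

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W, K = 64, 16, 16, 8   # channels, spatial dims, aggregated channels

feat = rng.standard_normal((C, H, W))

# Aggregate the C feature maps into K channels with a 1x1 projection
# (learned in the real network; random here for illustration).
proj = rng.standard_normal((K, C)) / np.sqrt(C)
agg = np.tensordot(proj, feat, axes=1)            # (K, H, W)

# Pairwise affinity between all channels and the aggregated channels:
# a C x K map rather than C x C, reducing the cost of channel attention.
f_flat = feat.reshape(C, -1)                      # (C, H*W)
a_flat = agg.reshape(K, -1)                       # (K, H*W)
affinity = f_flat @ a_flat.T                      # (C, K)
attn = np.exp(affinity - affinity.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)           # softmax over K

# Reweight: attend back over the aggregated channels, with a residual.
out = (attn @ a_flat).reshape(C, H, W) + feat
```

The spatial counterpart (SPAM) follows the same pattern but resamples feature anchors with spatial pyramid pooling before computing the affinities.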


Subject(s)
Remote Sensing Technology , Remote Sensing Technology/methods , Algorithms , Image Processing, Computer-Assisted/methods , Neural Networks, Computer